In practice, identifying which type of missing data you’ve got is difficult, requiring a combination of exploratory analysis, domain expertise, and good judgement.
The {mice} and {ggmice} R packages offer functions for carrying out exploratory analysis of missing values in a dataset.
The ggmice::plot_pattern() function is especially useful for inspecting the missing values in a dataset.
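For example, a minimal sketch using the built-in airquality dataset (which has missing values in Ozone and Solar.R); assumes {mice} and {ggmice} are installed:

```r
# A quick look at the missingness patterns in the built-in airquality
# dataset (only Ozone and Solar.R contain missing values)
library(ggmice)

plot_pattern(airquality)  # visual summary of the missing-data patterns

# A non-graphical alternative from {mice}: one row per pattern, with
# pattern counts and per-variable missing counts in the margins
patterns <- mice::md.pattern(airquality, plot = FALSE)
```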
It is also possible to model missingness using logistic regressions, treating missingness as a binary outcome, and including the rest of the dataset as explanatory variables.
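As a sketch of this idea (base R only, again using the built-in airquality dataset):

```r
# Treat missingness of Ozone as a binary outcome and model it
# with the fully observed variables as predictors
airquality$ozone_missing <- as.integer(is.na(airquality$Ozone))

missingness_model <- glm(
  ozone_missing ~ Wind + Temp + Month,
  family = "binomial",
  data = airquality
)

# Predictors with clearly non-zero coefficients suggest the missingness
# depends on observed data, i.e. the data is unlikely to be MCAR
summary(missingness_model)
```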
However, none of these methods will give a definitive answer.
The starting assumption should be that data is MNAR (Errickson 2017).
Dealing With Missing Data
How to Handle Missing Data in an Analysis
Common Approaches
The best solution for missing data is to recover the missing values, whether through further data collection, data processing, or theory-driven inference.
When this is not possible, there are two broad approaches to dealing with missing values: deletion and imputation.
The right approach is highly dependent on the nature of the missing data.
Dealing with missing data requires first understanding why it is missing!
Deletion Methods
Listwise Deletion
Removing any observations (rows) that contain missing values for any relevant variables.
Analysis carried out on complete cases only.
Pairwise Deletion
Removing any missing values, but not the entire observation (row).
Means and covariances are calculated on all observed values, and these can be used to build statistical models (Van Buuren 2018).
When data is MCAR and the volume of missing data is not an issue, listwise deletion is suitable.
When data is MCAR but the volume of missing data is a concern, pairwise deletion may be a better choice.
When data is MAR, deletion methods may still be valid under weaker assumptions (Errickson 2017).
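A minimal illustration of the two approaches in base R, with a hypothetical toy dataset:

```r
set.seed(42)

# Toy data with missing values scattered across rows
df <- data.frame(x = rnorm(20), y = rnorm(20), z = rnorm(20))
df$x[c(2, 5)] <- NA
df$y[c(5, 9)] <- NA

# Listwise deletion: drop every row containing any missing value
complete <- na.omit(df)
nrow(complete)  # 17 of the 20 rows survive

# Pairwise deletion: each covariance uses every row observed
# for that particular pair of variables
cov_pairwise <- cov(df, use = "pairwise.complete.obs")

# For comparison, the listwise (complete-cases-only) covariance
cov_listwise <- cov(df, use = "complete.obs")
```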
Imputation Methods
Imputation involves replacing missing values with values inferred from the rest of the data.
There are two types of imputation:
Single Imputation - Replacing missing values with a single value, estimated from some statistical procedure.
Multiple Imputation - Creating multiple datasets each replacing missing values with plausible estimated values and pooling estimates from analyses carried out on each dataset.
Imputation methods are valid whether data is MCAR, MAR, or MNAR.
Simple Imputation
There are lots of simple procedures for imputing missing values, usually involving replacing all missing values with the variable’s average value.
Another common method is to impute a constant value in place of missing values (such as 99, as often seen in survey data).
These methods are almost always a bad idea, because they have serious flaws:
They distort the variable’s distribution, underestimating its variance (Van Buuren 2018).
They disrupt the relationships between the variable with imputed values and all other variables (Nguyen 2020).
It is important to know that these simple imputation methods exist, but the circumstances where using them is valid are very few!
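The variance distortion is easy to demonstrate in base R (a hypothetical example with 30% of values set missing completely at random):

```r
set.seed(1)

x <- rnorm(1000, mean = 10, sd = 2)
x_miss <- replace(x, sample(1000, 300), NA)

# Mean imputation: fill every missing value with the observed mean
x_imp <- replace(x_miss, is.na(x_miss), mean(x_miss, na.rm = TRUE))

# The imputed variable's spread shrinks, because 300 identical
# values now sit exactly on the mean
sd(x)      # true spread, roughly 2
sd(x_imp)  # noticeably smaller
```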
Regression Without Imputation
```r
plot_regression <- function(data) {
  data |>
    ggplot(aes(x = total_working_years, y = monthly_income)) +
    geom_point(shape = 21, size = 1.5, alpha = .5) +
    geom_smooth(
      method = lm, colour = "#005EB8",
      fill = "#005EB8", alpha = .5
    ) +
    labs(
      x = "Total Working Years", y = "Monthly Income",
      title = "Monthly Income ~ Total Length of Career"
    )
}

attrition |>
  plot_regression()
```
Mean Imputation
```r
set.seed(123)

missing_years <- attrition |>
  mutate(
    total_working_years = replace(
      total_working_years,
      runif(n()) < 0.8 & (job_level <= 2 | age > 35),
      NA
    )
  )

missing_years |>
  mice::mice(
    method = "mean", m = 1,
    maxit = 1, print = FALSE
  ) |>
  mice::complete() |>
  plot_regression()
```
Regression Imputation
A more robust approach to single imputation is to estimate missing values using a predictive model of the variable in question, using the rest of the variables in the dataset.
Regression imputation can rely on a variety of algorithms, based on the type of data being imputed and how complex the model should be.
Common algorithms for regression imputation include linear and logistic regression, k-nearest neighbours, and random forest.
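A minimal sketch of regression imputation with a linear model, again using the built-in airquality dataset:

```r
# Fit a linear model for Ozone on the rows where it is observed,
# then predict the missing values from fully observed variables
observed <- !is.na(airquality$Ozone)

ozone_model <- lm(Ozone ~ Temp + Wind, data = airquality[observed, ])

imputed <- airquality
imputed$Ozone[!observed] <- predict(
  ozone_model,
  newdata = airquality[!observed, ]
)

# Every Ozone value is now "complete", but the imputed points sit
# exactly on the regression line, strengthening correlations artificially
```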
Van Buuren (2018) argues that regression imputation methods are the “most dangerous” of all the methods for handling missing data, because they give false confidence in the result, despite correlations being biased upwards and variance underestimated.
Imputing one value for a missing datum cannot be correct in general, because we don’t know what value to impute with certainty (if we did, it wouldn’t be missing) (Rubin 1987).
Multiple Imputation
Multiple imputation involves generating multiple datasets, performing analysis on each, and pooling the results. This is a two-stage process:
1. Generate multiple completed datasets, filling missing values using a statistical model that estimates imputation values, plus a random component to capture the uncertainty in the estimate.
2. Compute estimates on each completed dataset before combining them as pooled estimates and standard errors, using Rubin’s (1987) formula (Murray 2018).
The methods used for each stage may differ, but this two-stage approach is generally consistent across all forms of multiple imputation.
This approach acknowledges the uncertainty in the imputation of missing values, and bakes that uncertainty into the process, instead of treating imputed values with equal weight/certainty as non-missing values.
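The pooling step can be sketched in a few lines of base R (a simplified version of Rubin’s rules; in practice mice::pool() handles this, along with the degrees-of-freedom corrections):

```r
# Combine m point estimates and their squared standard errors
# using Rubin's rules
pool_estimates <- function(estimates, variances) {
  m <- length(estimates)
  q_bar <- mean(estimates)        # pooled point estimate
  u_bar <- mean(variances)        # average within-imputation variance
  b <- var(estimates)             # between-imputation variance
  t <- u_bar + (1 + 1 / m) * b    # total variance
  c(estimate = q_bar, se = sqrt(t))
}

# Hypothetical estimates from m = 3 completed datasets
pooled <- pool_estimates(
  estimates = c(1.02, 0.98, 1.05),
  variances = c(0.040, 0.050, 0.045)
)
```

The between-imputation variance is what separates multiple imputation from single imputation: it feeds the disagreement between the completed datasets into the pooled standard error.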
```r
set.seed(123)

missing_income <- attrition |>
  mutate(
    monthly_income = replace(
      monthly_income,
      runif(n()) < 0.8 & (job_level <= 2 | total_working_years > 10),
      NA
    ),
    total_working_years = replace(
      total_working_years,
      runif(n()) < 0.5 & job_level >= 3,
      NA
    )
  )

get_pooled_estimates <- function(data, method, m, maxit) {
  data |>
    mice::mice(
      method = method, m = m, maxit = maxit,
      print = FALSE, seed = 123
    ) |>
    with(
      glm(
        factor(attrition) ~ arm::rescale(monthly_income) + total_working_years,
        family = "binomial"
      )
    ) |>
    mice::pool()
}

no_imp <- glm(
  factor(attrition) ~ arm::rescale(monthly_income) + total_working_years,
  family = "binomial", data = attrition
)

mean_imp <- missing_income |>
  get_pooled_estimates(method = "mean", m = 1, maxit = 1)

norm_imp <- missing_income |>
  get_pooled_estimates(method = "norm.predict", m = 1, maxit = 1)

pmm_imp <- missing_income |>
  get_pooled_estimates(method = "pmm", m = 50, maxit = 20)
```
Regression Estimates
```r
models <- list(
  "No Imputation" = no_imp,
  "Mean" = mean_imp,
  "Regression" = norm_imp,
  "Predictive Mean Matching" = pmm_imp
)

cm <- c(
  "(Intercept)" = "(Intercept)",
  "arm::rescale(monthly_income)" = "Monthly Income",
  "total_working_years" = "Total Working Years"
)

modelsummary::modelsummary(
  models,
  exponentiate = TRUE, output = "gt",
  coef_map = cm, gof_omit = "IC|Log|F|RMSE",
  title = "Logistic Regressions of Job Attrition"
) |>
  gt::tab_spanner(label = "Single Imputation", columns = 3:4) |>
  gt::tab_spanner(label = "Multiple Imputation", columns = 5)
```
Logistic Regressions of Job Attrition (standard errors in parentheses)

|                     | No Imputation | Mean (single imputation) | Regression (single imputation) | Predictive Mean Matching (multiple imputation) |
|---------------------|---------------|--------------------------|--------------------------------|------------------------------------------------|
| (Intercept)         | 0.301         | 0.423                    | 0.268                          | 0.303                                          |
|                     | (0.058)       | (0.062)                  | (0.056)                        | (0.072)                                        |
| Monthly Income      | 0.550         | 0.865                    | 0.444                          | 0.564                                          |
|                     | (0.154)       | (0.142)                  | (0.134)                        | (0.227)                                        |
| Total Working Years | 0.950         | 0.913                    | 0.959                          | 0.949                                          |
|                     | (0.016)       | (0.014)                  | (0.017)                        | (0.019)                                        |
| Num.Obs.            | 1470          | 1470                     | 1470                           | 1470                                           |
| Num.Imp.            |               |                          |                                | 50                                             |
Conclusion
Not dealing with missing values is a methodological choice, because any tool for fitting statistical models will handle those missing values in some way (usually listwise deletion), and this has consequences.
How missing values should be handled depends on the nature, and potentially the volume, of the missingness (MCAR, MAR, MNAR).
Simple imputation methods are quick and easy but are often extremely flawed. Mean imputation is the most common example of a bad imputation method, but regression imputation is potentially even worse!
The best solution for missing values is to find them. Failing that, listwise deletion can be appropriate when data is MCAR, and multiple imputation is valid across all types of missingness.
Further Resources
Packages for carrying out single & multiple imputation in R & Python: